The isolated man does not develop any intellectual power. It is necessary for him to be immersed in an environment […] He may then perhaps do a little research of his own and make a very few discoveries […] the search for new techniques must be regarded as carried out by the human community as a whole, rather than by individuals.
Alan Turing, 1948, quoted in Nature v482
Science isn’t about what’s known. Nor is it about what isn’t known. At its most basic level, SCIENCE is nothing more than a process of playing games and making puzzles that may or may not tell us something about the world or ourselves in that world. When thought of in this way, it’s obvious that we all ‘do science’ hundreds of times a day, every day: it is about discovering and exploring through interaction. When interaction is made conscious and combined with reflection, that is ‘science’.
Beau Lotto (on “Street Science”)
There are some things that you, the reader of this preface, know to be true, and others that you know to be false; yet, despite this extensive knowledge that you have, there remain many things whose truth or falsity is not known to you. We say that you are uncertain about them. You are uncertain, to varying degrees, about everything in the future; much of the past is hidden from you; and there is a lot of the present about which you do not have full information. Uncertainty is everywhere and you cannot escape from it. Truth and falsity are the subjects of logic, which has a long history going back at least to classical Greece. The object of this book is to tell you about work that has been done in the twentieth century about uncertainty.
Research is carried out by individuals and often the best research is the product of one person thinking deeply on their own. For example, relativity is essentially the result of Einstein’s thoughts. Yet, in a sense, the person is irrelevant, for most scientists feel that if he had not discovered relativity, then someone else would; that relativity is somehow ‘‘out there’’ waiting to be revealed, the revelation necessarily being made by human beings but not necessarily by that human being. This may not be true in the arts so that, for example, if Shakespeare had not written his plays it would not follow that someone else would have produced equivalent writing. Science is a collective activity, much more so than art, and although some scientists stand out from the rest, the character of science depends to only a very small extent on individuals and what little effect they have disappears over time as their work is absorbed into the work of others.
Dennis Lindley, Understanding Uncertainty
I don’t know about you, but when I read that, I want to settle into an armchair and read all day long.
Genetics is the study of genes. When they were “invented” (i.e. conceptualized) in the 19th century, genes were defined as units of heredity. Thanks to the revolution of molecular biology, we now know that a gene is a section of DNA (although the precise meaning of this statement can still be debated). But what I am interested in here is asking a few questions about quantitative genetics, and especially heritability.
Indeed, this notion of heritability is key in genetics (and, by extension, in biology as a whole), but it often remains obscure to many biologists (it did to me, until I decided to seriously read some papers, and then to write this post to make sure I understood it properly). And you, reader, even if you think you know everything about heritability, I hope this is still worth reading.
* * *
But let’s start with the basics: Gregor Mendel and the birth of genetics. This Austrian monk was interested in inheritance, the passing of traits from parents to offspring. To study this phenomenon, he worked with pea plants and chose seven characters to study; let’s focus on one of them: seed color.
Mendel observed that some pea plants had green seeds while others had yellow seeds; we will speak of a “green” phenotype and a “yellow” phenotype. Intrigued about what would happen with the offspring, he started by crossing the peas having green seeds among themselves over several generations, and likewise for the peas with yellow seeds (selfing is possible in this species). After doing this for several generations, he observed that peas with the “green” phenotype always had offspring with the “green” phenotype, and likewise for the “yellow” phenotype. The offspring resulting from several generations of such crosses were called “pure lines”, as they were obtained by crossing plants with the same phenotype.
Then, Mendel crossed a yellow male plant (denoted P1) with a green female plant (denoted P2), and named the offspring the F1 generation. He observed that all F1 plants had green seeds, as if the yellow material had disappeared. He observed the same result with the reciprocal cross: a green male plant crossed with a yellow female plant gave green offspring. He therefore qualified the “green” phenotype as dominant and the “yellow” phenotype as recessive.
He then continued his experiment and crossed F1 plants together to obtain F2 plants. And here is what is interesting: some F2 plants had the “yellow” phenotype (although most had the “green” one), as if the “yellow” material had somehow jumped from the P generation to the F2 generation…! At this point, Mendel had the brilliant idea of counting the plants: he found that 3/4 had the “green” phenotype and 1/4 had the “yellow” phenotype, i.e. a 3:1 ratio.
Faced with such a striking observation, Mendel went on to characterize the F2 generation. And here again, he observed something strange: when selfing the “green” F2 plants, some had only “green” offspring but others had a mix of “green” and “yellow” offspring, here also in a 3:1 ratio. This was not the case when selfing the “yellow” F2 plants. Therefore, the “yellow” F2 plants seemed to be “pure” like the P “yellow” plants, whereas 2/3 of the “green” F2 plants were like the F1 plants, and 1/3 were “pure” like the P “green” plants.
Mendel then realized that behind the 3:1 ratio lay a more fundamental 1:2:1 ratio: the 3/4 of “green” F2 plants are in fact 1/4 “pure” green and 2/4 “impure” green, whereas the 1/4 of “yellow” F2 plants are all “pure” yellow:
From all this, Mendel was able to develop his theory, summarized in the 5 points below:
- existence of genes, i.e. discrete units (“atomic particles”) of heredity;
- genes come in pairs, and a gene may have different forms, called alleles;
- each gamete carries only one member of each gene pair;
- the members of each gene pair segregate equally into the gametes;
- random fertilization, i.e. gametes combine to form an organism without regard to which alleles they carry.
Now let’s recapitulate by noting “A” the “green” allele, and “a” the “yellow” allele. In the case of the peas described above, Mendel hypothesized that the P “green” plants had an “AA” gene pair, called their genotype, while the P “yellow” plants had the “aa” genotype. As each parent can make only one kind of gamete, “A” for the green and “a” for the yellow, the F1 plants must have the “Aa” genotype. But these plants can make “A” gametes as well as “a” gametes, in equal proportions. This gives rise to the F2 plants: 1/4 have the “AA” genotype and thus the green phenotype, 1/2 have the “Aa” genotype and thus the green phenotype also, and 1/4 have the “aa” genotype and therefore the yellow phenotype:
This is Mendel’s explanation of the 1:2:1 ratio. It seems likely that he got it by imagining that the F1 plants needed to carry a “bit” of yellow as well as a “bit” of green somewhere. Still, this will remain, forever, one of the greatest scientific discoveries…
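Mendel’s model is easy to check numerically. Below is a minimal sketch (the function names are mine, just for illustration) that simulates crossing F1 “Aa” plants together and recovers both the 1:2:1 genotypic ratio and the 3:1 phenotypic ratio:

```python
import random

random.seed(42)

def cross(parent1, parent2):
    """Each parent transmits one randomly chosen allele of its pair."""
    return random.choice(parent1) + random.choice(parent2)

def phenotype(genotype):
    """'A' (green) is dominant over 'a' (yellow)."""
    return "green" if "A" in genotype else "yellow"

# All F1 plants are "Aa"; cross them together to produce a large F2 generation.
n = 100_000
f2 = [cross("Aa", "Aa") for _ in range(n)]

# Genotypic frequencies should approach 1:2:1 (AA : Aa : aa).
counts = {g: sum(1 for x in f2 if sorted(x) == sorted(g)) / n
          for g in ("AA", "Aa", "aa")}
print(counts)

# Phenotypic frequency of "green" should approach 3/4.
green = sum(1 for x in f2 if phenotype(x) == "green") / n
print(green)
```

With 100,000 simulated plants, the frequencies land very close to 0.25 / 0.50 / 0.25 and 0.75, which is exactly Mendel’s arithmetic.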
* * *
Ok, now let’s look at another classical example which, in the end, introduces quantitative genetics and, from there, heritability. We are now in the early 20th century, and the American geneticist Edward M. East was also having fun crossing plants. He worked with a species related to tobacco, some plants having a long corolla (~90mm), others a short one (~40mm). Like Mendel, he started by selfing plants with a long corolla, and likewise plants with a short corolla. After several rounds of selfing, he got two “pure” lines: one with a long corolla (we will call them P1), and one with a short corolla (P2). He crossed plants from P1 with plants from P2 and got F1 plants with a medium-size corolla, ~65mm.
Up to here, everything seemed fine. But when he crossed F1 plants together to obtain F2 plants, he didn’t get a 3:1 ratio of corolla lengths. Instead, he got plants with a corolla of ~65mm on average, but with a much larger variability (the appropriate mathematical term is variance). To understand what had happened, he chose F2 plants with a small corolla, crossed them, and obtained F3 plants also with a small corolla. When he crossed F2 plants with a medium corolla, he also obtained F3 plants with a medium corolla. And so on (see the picture below).
This means that the inheritance of corolla length does have a genetic component, but not as simple a one as in the Mendelian case seen above. It is very likely that, instead of a single gene, several genes influence the length of the corolla.
Let’s imagine that 5 genes are involved, each with two alleles, + and -, and that each + allele lengthens the corolla by 1mm, whereas each - allele shortens it by 1mm. The P1 plants with a long corolla are likely to carry only the + allele at each of the 5 genes, whereas the P2 plants with a short corolla carry only the - allele at each of the 5 genes. The F1 plants thus all have 5 + alleles, coming from their P1 parent, and 5 - alleles from their P2 parent. However, when producing the F2 generation, segregation occurs in the gametes of the F1 parents, and thus the F2 offspring will carry different numbers of + and - alleles. Consequently, the F2 offspring will show a wider range of corolla lengths than their F1 parents.
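This toy additive model is easy to simulate. The sketch below assumes exactly the 5-gene, ±1mm scheme described above (an illustration, not East’s actual data): all F1 plants end up genetically identical, while variance reappears in the F2 generation:

```python
import random
import statistics

random.seed(1)

N_GENES = 5      # hypothetical number of additive loci
BASELINE = 65    # mm; illustrative mid-parent corolla length

def gamete(plant):
    """One randomly chosen allele (+1 or -1 mm) per gene."""
    return [random.choice(pair) for pair in plant]

def corolla_length(plant):
    return BASELINE + sum(allele for pair in plant for allele in pair)

# P1 ("long") is homozygous +/+ at all 5 genes, P2 ("short") homozygous -/-.
p1 = [(+1, +1)] * N_GENES
p2 = [(-1, -1)] * N_GENES

# F1 plants: one allele from each parent at every gene -> all identical +/-.
f1 = [list(zip(gamete(p1), gamete(p2))) for _ in range(10_000)]

# F2 plants: F1 x F1 crosses -> allele counts now vary among offspring.
f2 = [list(zip(gamete(random.choice(f1)), gamete(random.choice(f1))))
      for _ in range(10_000)]

sd_f1 = statistics.pstdev(corolla_length(p) for p in f1)
sd_f2 = statistics.pstdev(corolla_length(p) for p in f2)
print(sd_f1, sd_f2)  # F1 shows no genetic variance; F2 does
```

The F2 standard deviation is around 3mm here, purely from the reshuffling of the ten ±1mm alleles, while every F1 plant measures exactly 65mm.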
Here comes quantitative genetics: a trait is called quantitative if it varies continuously. Mendel’s seed color was a discrete (qualitative) trait, while corolla length is a continuous (quantitative) trait. Quantitative genetics is therefore the study of the genetic basis of quantitative traits. Why is it so important? Well, the overwhelming majority of traits are quantitative…
Here are the questions that quantitative geneticists try to answer:
- Is the observed variation in a character influenced at all by genetic variation? (versus environmental variation)
- If there is genetic variation, what are the norms of reaction of the various genotypes?
- How important is genetic variation as a source of total phenotypic variation?
- Do many loci (or only a few) contribute to the variation in the character? How are they distributed throughout the genome?
* * *
I won’t answer all of them because it would take way too long (!), and in these times of whole-genome sequencing, lots of research is still ongoing… But I can say a few more words.
Misconception: what differentiates a Mendelian trait from a quantitative one is the number of genes involved.
Although the earlier part may suggest it, this is false. The critical difference between Mendelian and quantitative traits is not the number of segregating loci, but the size of the phenotypic differences between genotypes compared with the individual variation within genotypic classes. The scheme below should make this clear: it represents the phenotypic distribution according to the genotype at a given bi-allelic locus, height being the phenotype of interest.
Hence, the definition of a quantitative character becomes: a quantitative character is one for which the average phenotypic differences between genotypes are small compared with the variation between individuals within genotypes.
* * *
After all this, what is heritability? Well, it’s simple. Let’s take a given phenotype, whatever it is (seed color, corolla length, hair color, disease status, growth rate, milk production…). The phenotype is the result of the interaction between genotype and environment. That is, a given genotype in a given environment may not lead to the same phenotype as the same genotype in a different environment, or as a different genotype in the same environment. That’s why the notion of reaction norm is essential.
But still, people always want to know how much genes contribute to some phenotype. Is it possible to say? Here again, it’s tricky. Let’s take the example of two bricklayers building a wall. If both of them work in parallel, that is one builds the left of the wall and the other builds the right, it is possible to assess their respective contributions: we just have to count the number of bricks each laid. But if now one makes the mortar and the other lays the bricks, it is no longer possible to compare their work (it would be absurd to do so). And it is the same for genes.
Thus, what do we do? Instead of trying to assess the contribution of genes to the phenotype (compared to the contribution of the environment), we can try to assess the contribution of genes to the variation of the phenotype, using what statisticians call the analysis of variance. When looking at a given phenotype among many individuals, we try to partition the variability of their phenotype (V_P) into a variability due to the fact that they have different alleles at the genes involved in the phenotype (V_G), and a variability due to the fact that they live in different environments (V_E): V_P = V_G + V_E.
Last but not least, the heritability can now be defined as the proportion of the phenotypic variance attributable to genetic variance: H² = V_G / V_P (the “broad-sense” heritability).
The question “Is a trait heritable?” is a question about the role that differences in genes play in the phenotypic differences between individuals or groups of individuals.
Misconception: a high heritability means that a character is unaffected by the environment.
Hell, no! Because genotype and environment interact to produce the phenotype, no partition of variation into genetic and environmental components can fully separate the causes of variation. What a highly heritable trait means is that the genetic component contributes much more than the environmental component to the variation in the phenotype.
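As a rough numerical illustration of this variance partition, here is a sketch assuming a purely additive model with independent genetic and environmental effects (no G×E interaction, an assumption real data rarely grants us):

```python
import random
import statistics

random.seed(0)

# Toy additive model (an assumption, not real data): for each individual,
# phenotype = genetic value + environmental deviation, independently drawn.
n = 50_000
genetic_values = [random.gauss(0, 2.0) for _ in range(n)]  # true V_G = 4
env_deviations = [random.gauss(0, 1.0) for _ in range(n)]  # true V_E = 1
phenotypes = [g + e for g, e in zip(genetic_values, env_deviations)]

v_g = statistics.pvariance(genetic_values)
v_p = statistics.pvariance(phenotypes)

heritability = v_g / v_p  # broad-sense H^2 = V_G / V_P, expected ~ 4/5
print(round(heritability, 2))
```

Note that the trait here is strongly affected by the environment (V_E = 1 is not negligible), yet the heritability comes out around 0.8, which is exactly the misconception’s point: high heritability measures relative contributions to variation, not immunity to the environment.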
* * *
There is still much to say of course, and one can find lots of very well explained details in the reference book Introduction to Genetic Analysis from Griffiths and colleagues. Indeed, I even allowed myself to scan some of its figures as they are very self-explanatory. (But due to copyright issues, I will remove them if the authors and/or publisher ask me to do so.)
I often hear in talks from statisticians that P-values are uniformly distributed under the null. But how can this be? And what does it mean? As the demonstration is pretty straightforward but nonetheless hard to find on the Internet, here it is.
Everything starts with an experiment (or at least with the observation of a natural phenomenon, be it part of an experiment or not). The aim is to assess whether the hypothesis we have about this phenomenon seems to be true. But first, let’s recall that a parametric test (see Wikipedia) consists of:
- data: the observations x1, x2, …, xn are realizations of random variables X1, X2, …, Xn assumed to be identically distributed;
- statistical model: the probability distribution of the X1, X2, …, Xn depends on parameter(s) θ;
- hypothesis: an assertion concerning θ, noted H0 for the null (e.g. θ = 0), and H1 for the alternative (e.g. θ = θ1 with θ1 ≠ 0);
- decision rule: given a test statistic T, if it belongs to the critical region R, the null hypothesis is rejected.

In practice, T follows a given distribution under H0 (e.g. a Normal distribution, or a Student distribution) that does not depend on θ but on the sample size n. We use the observations to compute a realization, noted t, of T.
The P-value, noted P, can be seen as a random variable, and its realization, noted p, depends on the observations. With these notations, the formal definition of the P-value for the given observations is: p = Pr(T ≥ t | H0) (written here for a one-sided test; use |T| for a two-sided one).
Therefore, according to Wikipedia, a P-value is the probability of obtaining a test statistic at least as extreme as the one that was actually observed, assuming that the null hypothesis is true. According to Matthew Stephens (source), a p value is the proportion of times that you would see evidence stronger than what was observed, against the null hypothesis, if the null hypothesis were true and you hypothetically repeated the experiment (sampling of individuals from a population) a large number of times.
Very importantly, note that the second definition emphasizes the fact that, although it is computed from the data, a P-value does not correspond to the probability that H0 is true given the data we actually observed!
A P-value simply gives information in the case we would repeat the experiment a large number of times… (That’s why P-values are often decried.)
Ok, back on topic now. From the formula above, we can also write: P = Pr(T ≥ t | H0) = 1 − Pr(T < t | H0).

By noting F0 the cumulative distribution function (cdf, fonction de répartition in French) of T under H0, we obtain: P = 1 − F0(T).

And here is the trick, thanks to the fact that, for a continuous test statistic, F0 is increasing and continuous, hence invertible on its support: for any u in [0, 1], Pr(F0(T) ≤ u) = Pr(T ≤ F0⁻¹(u)) = F0(F0⁻¹(u)) = u, i.e. F0(T) follows a Uniform(0, 1) distribution (the probability integral transform).

Therefore, we have: Pr(1 − P ≤ u) = Pr(F0(T) ≤ u) = u.

Which means that 1 − P follows a uniform distribution on [0, 1]. And, as this also means that P = 1 − (1 − P) is uniformly distributed, we can conclude that P-values are uniformly distributed under the null hypothesis.
But what does it mean? Well, we usually consider a significance level, noted α (small, e.g. 5%, 1%, 0.1%…), and if the P-value falls below this threshold, we reject the null and declare the result significant. However, let’s say we re-do the same experiment many times and compute a P-value for each replicate. Since P-values are uniformly distributed under the null, it is as likely to find some of them between 0.8 and 0.85 as to find some of them below 0.05, if H0 is indeed true. That is, some of them will fall below the significance threshold just by chance. The experiments corresponding to these P-values are called false positives: we think they are positives, i.e. we decide to accept H1, while in fact they are false, i.e. H0 is true and should not have been rejected.
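This uniformity is easy to check by simulation. The sketch below assumes a simple one-sample z-test with known variance (a simplification; any exact test would behave the same way) and repeats a null experiment many times:

```python
import random
import statistics
from math import erf, sqrt

random.seed(123)

def p_value(xs):
    """Two-sided z-test P-value for H0: mean = 0, with known sd = 1."""
    z = statistics.mean(xs) * sqrt(len(xs))
    normal_cdf = 0.5 * (1 + erf(abs(z) / sqrt(2)))
    return 2 * (1 - normal_cdf)

# Simulate many experiments in which H0 really is true: data ~ N(0, 1).
pvals = [p_value([random.gauss(0, 1) for _ in range(30)])
         for _ in range(5_000)]

# Under H0, P-values are ~ Uniform(0, 1): any interval of width 0.05
# should contain about 5% of them.
below_005 = sum(p < 0.05 for p in pvals) / len(pvals)
in_080_085 = sum(0.80 <= p < 0.85 for p in pvals) / len(pvals)
print(below_005, in_080_085)  # both close to 0.05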
Last but not least, if we re-do the same experiment 100 times and consider a threshold of 5%:
- if H0 is true (although we are not supposed to know it before doing the experiment), how many P-values will fall below this threshold just by chance? 5, on average;
- if now H0 is supposed to be true 50% of the time, what proportion of the experiments with a P-value around 0.05 will correspond to a true H0? At least 23%, and typically 50% (see the paper by Sellke et al. in 2001). In other words, when H0 is true 50% of the time, a P-value of 5% doesn’t tell us much, as a large fraction of the experiments from which such P-values were calculated correspond to a true H0…
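This last point can also be illustrated by simulation. The sketch below assumes a hypothetical effect of 0.2 standard deviations whenever the null is false; the exact fraction you get depends on the power of the test and on the prior, but among P-values near 0.05 the share coming from true nulls is always far larger than 5%:

```python
import random
import statistics
from math import erf, sqrt

random.seed(7)

def p_value(xs):
    """Two-sided z-test P-value for H0: mean = 0, with known sd = 1."""
    z = statistics.mean(xs) * sqrt(len(xs))
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# A mixture of experiments: H0 true half of the time (true mean 0), false
# the other half (a modest, hypothetical effect of 0.2 sd); n = 30 each.
experiments = []
for _ in range(20_000):
    h0_true = random.random() < 0.5
    true_mean = 0.0 if h0_true else 0.2
    p = p_value([random.gauss(true_mean, 1) for _ in range(30)])
    experiments.append((h0_true, p))

# Among the P-values that land near 0.05, which fraction comes from a true H0?
near_005 = [(h0, p) for h0, p in experiments if 0.04 <= p <= 0.06]
frac_true_null = sum(h0 for h0, _ in near_005) / len(near_005)
print(round(frac_true_null, 2))  # far above 5%
```

With these particular settings the fraction comes out around 30%, in the range reported by Sellke et al.: a “significant” P-value near 0.05 is much weaker evidence against H0 than it looks.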
Very likely he got more out of his faith than I out of my doubt. And so, pragmatism is true, he was right and I wrong.
Granville Stanley Hall
(the first to be awarded a PhD in psychology in the US, by Harvard’s philosophy department in 1878)
Prediction is very difficult, especially about the future.
The best way to predict the future is to invent it.
Why should we draw a line between a researcher (describe, explain, predict) and a social actor (involve, convince, transform)? Aren’t they the same (recombine, share and enjoy)?!