P_Lex works as follows. First, it divides the text into consecutive 10-word segments. Second, each word is categorised as ‘easy’ (the 1000 most frequent words, proper nouns, and numbers) or ‘difficult’ (all other words). P_Lex then counts the number of segments containing zero difficult words, the number containing one difficult word, and so on. This yields a curve such as the one illustrated in Figure 1. In the illustrated case, the proportion of segments containing zero difficult words is 0.4, the proportion containing one difficult word is 0.4, and the proportion containing two difficult words is 0.2. The observed curve is then fitted to a family of theoretical curves, each characterised by a lambda (λ) value. Here, the data best match the theoretical curve with lambda = 0.92, so the P_Lex score for this text is 0.92. The authors state that higher scores correspond to a higher proportion of infrequent words in a text and thus a lexically richer text.
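The procedure above can be sketched in a few lines of code. This is an illustrative approximation, not the authors' implementation: the `easy_words` set stands in for the 1000-word frequency list, the proper-noun and number checks are crude heuristics, and, on the assumption that the theoretical curves are Poisson distributions (as the λ parameterisation suggests), lambda is estimated here by the simple mean of difficult words per segment rather than by whatever curve-fitting routine P_Lex itself uses.

```python
from collections import Counter

def plex_sketch(words, easy_words, segment_len=10):
    """Illustrative sketch of the P_Lex procedure (not the original code)."""
    def is_easy(w):
        return (w.lower() in easy_words   # stand-in for the 1000-word list
                or w[:1].isupper()        # crude proper-noun heuristic
                or w.isdigit())           # numbers count as easy

    # 1. Divide the text into consecutive 10-word segments
    #    (a trailing partial segment is dropped).
    segments = [words[i:i + segment_len]
                for i in range(0, len(words) - segment_len + 1, segment_len)]

    # 2. Count the difficult words in each segment.
    counts = [sum(not is_easy(w) for w in seg) for seg in segments]

    # 3. Proportion of segments with 0, 1, 2, ... difficult words
    #    (the observed curve of Figure 1).
    dist = Counter(counts)
    proportions = {k: v / len(segments) for k, v in sorted(dist.items())}

    # 4. Under a Poisson assumption, the mean difficult-word count per
    #    segment is a simple stand-in for the fitted lambda value.
    lam = sum(counts) / len(segments)
    return proportions, lam
```

For example, a 30-word text whose three segments contain zero, one, and two difficult words respectively would give the proportions {0: 1/3, 1: 1/3, 2: 1/3} and a mean-based lambda of 1.0. Note that for the proportions in Figure 1 (0.4, 0.4, 0.2) the mean is 0.8, not the reported 0.92, which suggests P_Lex's own fitting procedure is more involved than this stand-in.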