2.7 - Word Strength & Groups

By Brandon Beaver • Published on October 24, 2024
The LSAT often asks us to evaluate numbers, proportions, and overlapping groups. Some students really struggle with these concepts. For others, they're a breeze.
If you suck at math, don't worry. You only need a basic understanding of proportions and percentages.
This lesson will cover:
  • Common terms, the quantities they signify, and what it means to logically negate them
  • How these terms interact and reveal information about groups
  • Some flaws you'll encounter involving numbers, proportions, and groups
Let's get into it.

Word Strength

This lesson's central idea is word strength, meaning the relationship between certain words and the probabilities or amounts they logically indicate.
We can lump terms into four main buckets: none, some, most, and all.

None

None is an objective term meaning zero—nada, zilch, zip.
If I say, "None of these people play basketball," then zero people in this particular group play basketball, no matter the group's size.
Intuitively, one might think that all is none's logical opposite. Idiomatically, perhaps, but not logically.
In fact, none's logical opposite is any positive amount, up to and including 100%.
In other words: some.

Some

Unlike none, some is a subjective term that means any positive number, or any percentage from 1% to 100%. Before the math nerds bite my head off... yes, positive percentages less than 1% also qualify (e.g., 0.1%), but I'm using whole numbers to keep things simple.
By subjective, we mean the quantity represented isn't specific. We need further information to determine a precise figure or quantity.
For example, "Some people prefer dogs to cats" just means that one or more people prefer dogs to cats. We don't know how many; it could be any positive number.
Bottom line: when you read some on the LSAT, it means a non-zero number or a percentage ranging from 1% to 100%.
Here are other terms you'll encounter that mean the same thing as some:
  • many
  • often
  • several
  • frequently
  • a lot
  • much
Notice what these terms all have in common. They each need a little clarification: How many? How often? How frequently?
Recall from before, some's logical opposite is none—if we don't have at least some, we know we have none.

Most

Most leads us to the land of majority. In other words, most means any number representing more than half of a group, or a percentage from 51% to 100%.
Most tells us more than some, but there's still subjectivity. For instance, if we're told "Most people are right-handed," we know it's more than half. So while 49% wouldn't qualify, 51%, 66%, or 99.99% all could.
Here are some terms that mean the same thing as most on the LSAT:
  • majority
  • probably
  • likely
  • more often than not
When it comes to most specifically, students often struggle to see how it can mean the same thing as probably or likely.
Simply put, if something will probably happen, it means there's a better chance that it will happen than not happen—51% or more. The same is true of likely. If it's likely to happen, it's more likely than not.

All

All shifts us back from subjective to objective. All means 100%—every member in a given group.
For instance, if the LSAT tells you "All cats purr," then 100% of cats share that characteristic (at least for that particular question).
I've seen other LSAT prep providers argue that all and none belong in the same bucket because they're both objective terms—the amounts they specify are indisputable. I don't disagree.
I choose to separate them mostly because they have different logical opposites. The logical opposite of none is some, but the logical opposite of all is simply not all. Some covers 1% through 100%, whereas not all covers everything from 0% up to, but not including, 100%.
Think back to our cats-purring example. If it had said "Not all cats purr," then there would be at least one cat out there that doesn't purr.
Here are some other terms you'll encounter that mean all:
  • each
  • any
  • every
  • entire
  • whole

When These Terms Interact

So far, we've looked at some basic examples of what these terms mean on their own, but what about when they interact?
When we get multiple conditions that use these terms, we get varying degrees of overlapping or concentric groups. Think Venn diagrams or circles-within-circles.
Let's look at some examples.

Combining Objective Terms

Imagine the LSAT hit us with the following:
All Schnauzers are dogs. All Schnauzers bark.
We can infer a few things here.
First off, we have three groups: dogs, dogs that bark, and Schnauzers.
Since all Schnauzers are both dogs and barkers, we know there's at least some amount of overlap in the two characteristics. Even if there's only 1 Schnauzer on earth, it's a dog and it barks, meaning there's at least 1 dog that barks.
We could also infer that at least some dogs that bark are Schnauzers and that any dog that doesn't bark is not a Schnauzer.
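If it helps to see these inferences concretely, here's a minimal sketch using Python sets. The dog names and group sizes are made up purely for illustration; the point is that the premises force the overlaps no matter what numbers you pick.

```python
# Hypothetical populations, invented for illustration only.
schnauzers = {"Rex", "Heidi"}
dogs = {"Rex", "Heidi", "Fido", "Luna"}
barkers = {"Rex", "Heidi", "Fido"}

# Premises: "All Schnauzers are dogs" and "All Schnauzers bark."
assert schnauzers <= dogs      # every Schnauzer is a dog
assert schnauzers <= barkers   # every Schnauzer barks

# Inference: at least some dogs bark (the Schnauzers, if nothing else).
assert dogs & barkers          # the overlap is non-empty

# Inference: any dog that doesn't bark is not a Schnauzer.
assert not (dogs - barkers) & schnauzers
```

Try deleting a Schnauzer from `barkers` and the second premise fails, which is exactly the point: the inferences hold only because every Schnauzer sits inside both circles.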
Now let's switch things up to cover none.
All Schnauzers are dogs. No Schnauzers bark.
Here we get totally different inferences.
For instance, we no longer know whether any dogs bark at all. That probably sounds absurd, but we have to play by these rules, and the rules never mention a single dog that barks, only some dogs (the Schnauzers) that don't.

Combining Subjective Terms

So what happens when we combine subjective terms, like some with some? Not much, really.
Let's revisit the Schnauzers:
Some Schnauzers bark. Some Schnauzers don't bark.
What did we learn from this?
We establish two groups: some non-zero number of Schnauzers bark and some non-zero number don't.
We also disqualify the extremes: since some Schnauzers bark and some don't, it can't be true that all of them bark, and it can't be true that none of them do. This is worth noting because some, on its own, doesn't rule out all.

Combining Most with Some or All

How about majorities? What happens when they get involved? Back to the Schnauzers:
All Schnauzers are dogs. Most Schnauzers bark.
Here we know somewhere between 51% and 100% of Schnauzers bark, and since every Schnauzer is a dog, we can infer that some dogs bark. Be careful, though: most doesn't rule out all, so we can't infer that some Schnauzers (or some dogs) don't bark.
Now consider the some equivalent:
Some dogs are Schnauzers. Most Schnauzers bark.
Here, we know that there's some non-zero number of dogs categorized as Schnauzers and that more than half of them bark. So we can infer that at least some dogs bark but we don't know what percent of dogs happen to be Schnauzers.
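Here's the same reasoning with made-up numbers plugged in, just to show how little the premises pin down. The specific figures (1,000 dogs, 10 Schnauzers, 6 barkers) are arbitrary; any choice satisfying the premises works.

```python
# Made-up numbers for "Some dogs are Schnauzers. Most Schnauzers bark."
total_dogs = 1000
schnauzers = 10            # "some": any non-zero count satisfies the premise
barking_schnauzers = 6     # "most": any count over half the Schnauzers

assert schnauzers > 0                          # some dogs are Schnauzers
assert barking_schnauzers / schnauzers > 0.5   # most Schnauzers bark

# We can conclude that some dogs bark...
dogs_known_to_bark = barking_schnauzers
assert dogs_known_to_bark > 0

# ...but the share of dogs that are Schnauzers could be anything non-zero.
print(schnauzers / total_dogs)  # 0.01 here, but nothing forces this value
```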
Remember how I brought up Venn diagrams and concentric circles earlier? If you're a visual learner, these tools can help make sense of majorities, overlaps, etc.
I use this framework all the time, exaggerating majorities and minorities to make relationships more obvious. Give it a try next time you struggle to understand groups in your head.

Common Statistical Flaws

The LSAT loves to test these kinds of overlaps and will often incorporate specific numbers or percentages. Here are some common ways you'll be tested.

The Gambler's Fallacy

Go flip a coin 10 times. How many came up heads? Before you flip it again, will the next one be heads or tails?
If you said anything besides, "I don't know," you've committed the Gambler's Fallacy, that age-old issue of assuming you're "due" for a certain result just because it hasn't happened for a while.
This flaw boils down to the relationship between probability and prediction.
Probability helps us understand the likelihood of an event, especially as we increase the frequency of the event. But it can't logically determine the outcome of any given instance of the event.
In the case of our coin flips, past flips don't predict the result of any given flip.
This flaw takes many forms on the LSAT, but here are a few shorthand examples:
  • Because previous studies all show X, the next study will definitely show result X
  • Because we've always done X, we shouldn't change to Y
  • Johnny homered in his last 4 games, so he's going to hit one tonight
Be prepared to call BS. Past performance doesn't logically predict future results.
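If you want to see the coin's indifference for yourself, here's a quick simulation sketch. It flips a fair coin many times, finds every spot where the previous five flips were all tails, and checks how often heads comes up next. (The seed and flip count are arbitrary choices for reproducibility.)

```python
import random

random.seed(0)  # arbitrary seed, just so the run is reproducible

# Flip a fair coin 100,000 times. True = heads.
flips = [random.random() < 0.5 for _ in range(100_000)]

# Collect the flip that immediately follows every run of 5 tails.
after_streak = []
for i in range(5, len(flips)):
    if not any(flips[i - 5:i]):    # previous 5 flips were all tails
        after_streak.append(flips[i])

# If the gambler were right, heads would be "due" after a tails streak.
rate = sum(after_streak) / len(after_streak)
print(round(rate, 2))  # hovers near 0.50, nowhere near 1.0
```

The coin has no memory: conditioning on a streak doesn't move the next-flip probability off 50%.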

Percent Change ≠ Number Change

Imagine a podcast producer made the following argument:
Last year, our listeners were 60% men, 40% women. Near the end of the year, we hired Amy as a cohost. Amy clearly brought in more female listeners—our demographics now split 40% men, 60% women.
If your gut reaction was "Wait, what?", good. Lean into that. Our podcast producer is confusing a change in percentages with a change in whole numbers.
There are lots of ways the listener demographics could have shifted that didn't include an increase in female listeners. What if Amy's a firebrand whose commentary scared away a bunch of male listeners? Totally plausible.
Use fake numbers to make your objections more concrete: Last year, we had 100 listeners (60 male, 40 female). This year, we have 40% male listeners and 60% female listeners... but only 50 listeners total (20 male, 30 female). Not only did Amy fail to add to the total number of female listeners; she may have caused a drop in the overall number of both demographics.
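The fake numbers above can be worked out explicitly. This little sketch just restates that arithmetic: the percentages shift exactly as the producer claims while the raw female headcount falls.

```python
# The fake listener counts from the example above.
last_year = {"male": 60, "female": 40}   # 100 listeners total
this_year = {"male": 20, "female": 30}   # 50 listeners total

def share(count, population):
    """Fraction of the population a group represents."""
    return count / sum(population.values())

# The percentages moved exactly as the producer claims...
print(share(this_year["female"], this_year))  # 0.6
print(share(this_year["male"], this_year))    # 0.4

# ...yet the raw number of female listeners went DOWN.
print(this_year["female"] - last_year["female"])  # -10
```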
This flaw typically hides in language about rates or percentages. As always, read carefully.

Confusing Proportions and Whole Numbers

The LSAT loves to make this error in both directions: either giving us facts about proportions and then concluding something in terms of whole numbers, or vice versa. Here's an example:
Gus is an impeccable drug dog. While he does have a false-positive rate of 1%, he's correct 100% of the time when a container is actually concealing drugs. Therefore, 99% of the time Gus signals, there are drugs in the container.
Not necessarily. Like before, let's use real numbers to tease this apart.
What if Gus is involved in 101 drug stops this month, only 1 of which actually involves drugs? On that one particular stop, Gus gets the good-boy. But when we account for his false-positive rate, he mistakenly signals on 1 of the other 100 stops.
That's 2 signals across 101 stops, only 1 of which is correct: a 50% accuracy rate when Gus signals, not 99%.
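Here's that month of drug stops as plain arithmetic, using the same made-up counts (101 stops, 1 with drugs):

```python
# Gus's hypothetical month, using the numbers from the example above.
stops = 101
stops_with_drugs = 1

true_positives = stops_with_drugs * 1.00             # signals on 100% of real drugs
false_positives = (stops - stops_with_drugs) * 0.01  # 1% of the 100 clean stops

total_signals = true_positives + false_positives
print(true_positives / total_signals)  # 0.5: only half of Gus's signals are right
```

Because drugs are rare in this scenario, even a tiny false-positive rate produces as many bad signals as good ones.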
I forget which test it comes from, but there's a great LR question about bomb detectors and their success rates versus their false positives that beautifully articulates this flaw. When I come across it again, I'll be sure to update this lesson with a link to the question.

Mistaking Overlaps Between Groups

It's easy to mistake relationships between groups when passages shift back and forth between subjective and objective terms.
Let's revisit the Schnauzers one more time (then no more Schnauzer-talk the rest of the course—promise!).
All Schnauzers are dogs. Some dogs bark. Therefore, some Schnauzers bark.
This flawed argument starts with a conditional: If it's a Schnauzer, then it's a dog. That's all well and good.
Then we get a subjective statement: Some dogs happen to bark. Cool, but how many? What else do we know about these barkers? Anything?
It finishes with a bad conclusion: Some Schnauzers must bark. Not so.
In this example, being a dog is the only characteristic the premises share, so it seems like the one to key in on. But the conclusion links two characteristics the premises never connect: being a Schnauzer and barking. For all we know, every barking dog is a non-Schnauzer.
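A concrete counterexample makes the gap obvious. This sketch builds a made-up world (the names are invented) where both premises hold and the conclusion still fails:

```python
# An invented world where both premises are true but the conclusion is false.
schnauzers = {"Rex"}
dogs = {"Rex", "Fido"}
barkers = {"Fido"}

assert schnauzers <= dogs   # "All Schnauzers are dogs": true
assert dogs & barkers       # "Some dogs bark": true (Fido barks)

# And yet no Schnauzer barks in this world.
print(schnauzers & barkers)  # set()
```

One world like this is all it takes to sink the argument: the premises don't guarantee the conclusion.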
When you run into scenarios that don't make sense, slow down and ask yourself: Which groups are involved, which characteristics do they and don't they share, and how do you know for certain? The Venn diagram exercise I mentioned earlier helps a ton with these.
---
Speaking of how you know for certain, join us in our next lesson where we get to the bottom of what could or must be true. See you there.
