Machine Learning 101: The What, Why, and How of Weighting

By Eric Hart, Altair.



One thing I get asked about a lot is weighting. What is it? How do I do it? What do I need to worry about? By popular demand, I recently put together a lunch-and-learn at my company to help address all these questions. The goal was to be relevant to a large audience (e.g., with a gentle introduction), but also to include some good technical advice/details to help practitioners. This blog was adapted from that presentation.


Model Fundamentals

Before we talk about weighting, we should all get on the same page about what a model is, what models are used for, and some of the common issues that modelers run into. Models are basically tools that humans can use to (over-)simplify the real world in a rigorous way. Typically, they take the form of equations or rule sets, and they usually try to capture patterns that are found in data. People often divide models into one of two high-level categories in terms of what they're being used for (and yes, there is overlap): inference and prediction. Inference means trying to use models to help understand the world. Think scientists trying to uncover physical truths from data. Prediction means trying to make guesses about what's going to happen. For most of the rest of this discussion, we're going to be focused on models built with applications of prediction in mind.

If you're building a model to make predictions, you're going to need a way to measure how good that model is at making predictions. A good place to start is with accuracy. The accuracy of a model is simply the proportion of its predictions that turn out to be correct. This can be a useful measure, and indeed it's essentially the default measure for a lot of automatic tools for training models, but it certainly has some shortcomings. For one thing, accuracy doesn't care about how likely the model thinks each prediction is (the probabilities associated with the guesses). Accuracy also doesn't care about which predictions the model makes (e.g., if it's deciding between a positive or negative event, accuracy doesn't give one of them priority over the other). Accuracy also doesn't care about differences between the set of data the accuracy is measured on and the real population the model is supposed to apply to. These things can be important.

Let's jump into an example. Recently, the 2019 World Series concluded. I'm going to contrast some predictions for that World Series below. Spoiler alert: I'm going to tell you who won the World Series.

The 2019 World Series took place between the Washington Nationals and the Houston Astros.

Let's compare some models. I have an internal model in my head, which I won't explain. It gave these predictions for each of the 7 games of the series:

Apparently, I was pretty high on the Nationals. Meanwhile, one of my favorite websites, FiveThirtyEight, has a sophisticated statistical model which they use to predict games. It gave these predictions:

It's worth noting: these are individual game predictions, made in advance of each game, assuming it was going to happen; these are not predictions for the series as a whole. Since the series is a best-of-seven, once a team wins 4 games, no more games are played, so obviously neither of the two sets of predictions above could happen exactly as is.

These predictions are based on things like team strength, home field advantage, and specifics of which pitcher is starting each game. FiveThirtyEight's model predicted the home team to win every game, except game 5, when Houston's best pitcher, Gerrit Cole, was pitching in Washington.

Anyway, let's spoil the series, see what happened, and compare accuracy. Here was the real outcome:

The series went 7 games, with Washington winning the first 2 and the last 2. (As an aside, the away team won every game in this series, which had never happened before in this, or any other, major North American professional sport.)

So, let's look at how we each did:

My result:

I got 5 of the 7 games right, for an accuracy of 0.71.

FiveThirtyEight's result:

FiveThirtyEight got only 1 game right, for an accuracy of 0.14.

So, you might be tempted to think that my model is better than FiveThirtyEight's, or at least that it did better in this case. But predictions aren't the whole story. FiveThirtyEight's model comes not only with predictions, but with helpful probabilities, suggesting how likely FiveThirtyEight thinks each team is to win. In this case, those probabilities looked like this:

Meanwhile, I'm the jerk who's always certain he's right about everything. You all know the type. My probabilities looked like this:

With this in thoughts, though I had the next accuracy, you may rethink which mannequin you like.

It's important to discuss this because we're going to be talking about accuracy again soon while addressing some of the shortcomings of certain models. It's important to remember that accuracy can be a useful metric, but it doesn't always give you a full picture. There are other tools for measuring the quality of models that take into account things like probabilities, and other more subtle concepts than just prediction. A few examples I like to use are lift charts, ROC curves, and precision-recall curves. These are good tools that can be used to help select a model, but none of them are fool-proof on their own. Also, these tools are more visual, and often, machine learning algorithms need a single number under the hood, so accuracy ends up getting baked into a lot of algorithms as the default measure of choice, even when that's detrimental to some external goal.


The What of Weighting

Consider the following data table:

Say you want to build a model to predict the value of 'DV' using only Age and Sex. Because there is only one example with DV=1, you may have a hard time predicting that value. If this is a dataset about credit lending, and the DV is whether someone is going to default (1 meaning they defaulted), you might care more about being correct on the DV=1 cases than the DV=0 cases. Take a minute and see if you can come up with a model (e.g., some rules) to predict DV=1 in this dataset. (Spoiler: you can't possibly get 100% accuracy.)

Here are some possible examples:

  1. Intercept Only Model
    • Always predict 0
    • Accuracy = 0.9
  2. 27 Year Olds
    • Predict DV=1 for 27-year-olds, and 0 otherwise
    • Accuracy = 0.9
  3. Young Women
    • Predict DV=1 for women under 30, and 0 otherwise
    • Accuracy = 0.9

All these models have the same accuracy, but some of them may be more useful in certain use cases. If you ask a decision tree or a logistic regression algorithm to build a model on this dataset, they will both give you model #1 by default. The models don't know that you might care more about the DV=1 cases. So, how can we make the DV=1 cases more important to the model?

Idea #1: Oversampling: Duplicate existing rare event records and leave common event records alone.

Idea #2: Undersampling: Remove some of the common event records, keep all the rare event records.

In both cases, we manually train the model on a different population than the one we naturally had, with the express purpose of affecting the model we're going to build.
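Both ideas can be sketched in a few lines of pandas. The tiny dataset below is a made-up stand-in for the table above (the specific ages and sexes are assumptions): nine common records (DV=0) and one rare record (DV=1).

```python
import pandas as pd

# Illustrative stand-in for the data table above: one rare DV=1 record.
df = pd.DataFrame({
    "Age": [27, 34, 51, 45, 27, 62, 38, 29, 41, 55],
    "Sex": ["F", "M", "M", "F", "M", "F", "M", "F", "M", "F"],
    "DV":  [1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
})

rare, common = df[df.DV == 1], df[df.DV == 0]

# Idea #1 -- oversampling: duplicate the rare records (here, 3 copies each).
oversampled = pd.concat([common, pd.concat([rare] * 3)], ignore_index=True)

# Idea #2 -- undersampling: keep all rare records, sample a subset of common ones.
undersampled = pd.concat([rare, common.sample(n=4, random_state=0)],
                         ignore_index=True)

print(len(oversampled), len(undersampled))  # 12 and 5
```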

Weighting is kind of like this, but instead of duplicating or removing records, we assign a different weight to each record as a separate column. For example, instead of doing this:

We could just do this:

You might legitimately ask whether it's possible for machine learning algorithms to handle weighted data this way, but the legitimate answer is: "yes, it's fine, don't worry about it."
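For instance, in scikit-learn (to pick one common toolkit), most estimators accept a weight column directly through the `sample_weight` argument of `fit`. The features below are a made-up numeric stand-in for the Age/Sex/DV table above:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative stand-in for the table above: [Age, Sex-coded-as-0/1] features.
X = np.array([[27, 0], [34, 1], [51, 1], [45, 0], [27, 1],
              [62, 0], [38, 1], [29, 0], [41, 1], [55, 0]])
y = np.array([1, 0, 0, 0, 0, 0, 0, 0, 0, 0])

# The weight column: upweight the single rare (y=1) record, leave the rest at 1.
w = np.where(y == 1, 2.0, 1.0)

# The algorithm treats each record as if it appeared `w` times.
model = LogisticRegression().fit(X, y, sample_weight=w)
print(model.predict(X))
```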

In fact, we don't even need to use integer weights. You could even do something like this:

But for now, let's stick with the integer weights. Say you want to build your model on the weighted dataset above:

Then if we revisit the possible models we discussed before, we see this:

  1. Intercept Only Model
    • Weighted Accuracy (for training model): 0.82
    • Real Accuracy = 0.9
  2. 27 Year Olds
    • Weighted Accuracy (for training model): 0.91
    • Real Accuracy = 0.9
  3. Young Women
    • Weighted Accuracy (for training model): 0.91
    • Real Accuracy = 0.9

By weighting, even though we haven't changed the real accuracy, we've changed the weighted accuracy, which can make the first option less desirable than the latter two at training time. In fact, if you build a model on this weighted table, a decision tree naturally finds the second model, while a logistic regression naturally finds the third model.
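As a quick check of the arithmetic, here is the weighted-accuracy calculation. I'm assuming the rare record was given weight 2 and the nine common records weight 1, which is the assignment consistent with the figures above:

```python
# Assumed weight column: 2 on the single rare record, 1 on each common record.
weights = [2] + [1] * 9
total = sum(weights)             # 11

# Model 1 (intercept only): always predict 0, so only the rare record is missed.
m1 = (total - 2) / total         # 9/11

# Models 2 and 3: catch the rare record but misclassify one common record.
m2 = (total - 1) / total         # 10/11

print(round(m1, 2), round(m2, 2))  # 0.82 0.91
```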

A common question I get about weighting is: how do you handle things after model training? Do you need to apply your weights during validation or deployment? The answer is no! The point of weighting was to influence your model, not to influence the world. Once your model is trained, your weights are no longer needed (or useful) for anything.


The Why of Weighting

In the previous section we got a vague sense of why you might want to use weighting to influence model selection, but let's dive in a little deeper.

One reason you might want to use weighting is if your training data isn't a representative sample of the data you plan to apply your model to. Pollsters come across this problem all the time. Fun fact: with a truly random sample of just 2500 people, pollsters could predict election outcomes within +/- 1%. The problem is how to get a truly random sample. This is tricky, so pollsters routinely apply weights to different poll participants to get the sample demographics to more closely match the expected voting demographics. There's a great story from 2016 about how one person was responsible for strongly skewing an entire poll. You can read about it here.

Here's a common argument I hear for another reason you might want to use weighting:

  1. Fraud is rare
  2. Need to upweight rare cases
  3. Then the model performs better overall

This common argument turns out to be incorrect. Usually, weighting causes models to perform worse overall, but to perform better on certain segments of the population. In the case of fraud, the missing piece from the above argument is that failing to catch fraud is usually more expensive than falsely flagging non-fraudulent transactions.

What we really want to talk about as a reason for weighting is cost-sensitive classification. Some examples:

  • In fraud detection, false negatives tend to cost more than false positives
  • In credit lending, defaults tend to be more expensive than rejected loans that would not have defaulted
  • In preventative maintenance, part failures tend to be more expensive than premature maintenance

All of these are only true within reason. You can't go all the way to the extreme, because flagging every transaction as fraudulent, rejecting all loan applications, or spending all your time repairing systems instead of using them are all obviously bad business decisions. But the point is, certain kinds of misclassification are more expensive than other kinds, so we might want to influence our model to make decisions that are wrong more often, but still less expensive overall. Weighting is a reasonable tool to help solve this problem.
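The fraud bullet above can be illustrated with a short sketch that scores classifiers by total cost instead of accuracy. The cost figures (a missed fraud costs 50, a falsely flagged transaction costs 1) and the 1% fraud rate are assumptions for illustration:

```python
# Assumed costs: false negative (missed fraud) = 50, false positive = 1.
COST_FN, COST_FP = 50.0, 1.0

def total_cost(actual, predicted):
    # Sum the cost of each kind of mistake; correct predictions cost nothing.
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    return fn * COST_FN + fp * COST_FP

actual = [1] + [0] * 99            # 1% fraud rate: 1 fraud in 100 transactions

never_flag = [0] * 100             # 99% accurate, but misses the fraud
cautious   = [1] * 10 + [0] * 90   # only 91% accurate, but catches the fraud

print(total_cost(actual, never_flag))  # 50.0
print(total_cost(actual, cautious))    # 9.0
```

The less accurate classifier is the cheaper one, which is exactly the point: accuracy and cost can pull in opposite directions.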


The How of Weighting

Note upfront: a lot of the good information in this section comes from this paper.

This brings us to an important question: how do you weight? With most modern data science software, you just use computer magic and the weights are taken care of for you.

But, of course, there are still important decisions to make. Should you over-sample or under-sample, or just use weighting? Should you use advanced packages like SMOTE or ROSE? What weights should you pick?

Well, good news: I'm going to tell you the optimal thing to do. But of course, since I'm a data scientist, I'm going to give you a lot of caveats first:

  1. This method is only "optimal" for cost-sensitive classification problems (e.g., you have different costs for different outcomes of what you're predicting). I'm not going to help pollsters out here (sorry).
  2. This method assumes you know the costs associated with your problem, and that those costs are fixed. If the true costs are unknown, or variable, this gets a lot harder.

Before we get right down to the formulas, let's talk about a little trap you might fall into: baseline/reference errors. Here's a typical-looking cost-matrix example from the German credit dataset from the Statlog project.

                 Actual Bad    Actual Good
Predict Bad          0             1
Predict Good         5             0


If you look at this for a few minutes, the reasoning behind where it came from becomes clear. It is meant to show that if you misclassify someone who will default and give them a loan, you'll lose $5, but if you misclassify someone who wouldn't have defaulted and don't give them credit, you lose the opportunity to make $1, which is itself a cost.

But something is wrong here. If the intention is that when you predict bad you don't extend someone credit, then it seems clear that the costs associated with predicting bad should be the same no matter what would have actually happened. In fact, this cost matrix is probably supposed to be this:

                 Actual Bad    Actual Good
Predict Bad          0             0
Predict Good         5            -1


Here there's a negative cost (i.e., a benefit) of predicting good on someone who will actually repay their loan. While these two cost matrices are similar, they are not interchangeable. If you imagine a situation where you get one customer of each type, the first matrix suggests your total costs will be $6, and the second suggests your total costs will be $4. The second is clearly what is intended here. For this reason, it's often helpful to frame your matrix in terms of benefits instead of costs. That said, I'm still going to give you the formula in terms of costs, and leave it to the reader to figure out how to translate it into a benefit formula.

So, say you have a cost matrix that looks like this:

                   Actual Negative   Actual Positive
Predict Negative        C00               C01
Predict Positive        C10               C11


First, we'll state two reasonableness criteria:

C10 > C00 and C01 > C11

These criteria basically say that your costs are higher if you misclassify something than if you don't. Reasonable, right?

Now define:

p* = (C10 - C00) / ((C10 - C00) + (C01 - C11))

Then you should upweight your negative examples by p* / (1 - p*) (and leave your positive examples alone).

Actually, we can even take this a step further. Many machine learning models produce probabilities (as opposed to just predictions) and then use a threshold to convert that probability into a prediction. In other words, you have some rule like: if the probability of being positive is greater than 0.5, predict positive; otherwise predict negative. But you could instead have a rule that says something else, e.g., if the probability of being positive is greater than 0.75, predict positive; otherwise predict negative. The number here (0.5 or 0.75 in the above examples) is called a "threshold." More on that is coming a little later, but for now, we can write down a formula for how to make a model with threshold p0 act as if it has threshold p*: we do that by upweighting the negative examples by:

(p* / (1 - p*)) x ((1 - p0) / p0)


Note that in the special case where p0 is 0.5 (as is typical), this reduces to the previous formula.
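The recipe above fits in a few lines of code. This is a sketch of the cost-sensitive weighting formula as stated in this section, applied to the corrected German-credit matrix, taking "bad" (default) as the positive class:

```python
def negative_weight(c00, c01, c10, c11, p0=0.5):
    """Weight for each negative example (positives keep weight 1).

    C_ij = cost of predicting class i when the actual class is j,
    with 0 = negative and 1 = positive, matching the tables above.
    """
    assert c10 > c00 and c01 > c11, "reasonableness criteria violated"
    p_star = (c10 - c00) / ((c10 - c00) + (c01 - c11))
    return (p_star / (1 - p_star)) * ((1 - p0) / p0)

# Corrected German-credit matrix with "bad" as positive:
# predict-good/actual-good = -1, predict-good/actual-bad = 5, predict-bad rows = 0.
w = negative_weight(c00=-1, c01=5, c10=0, c11=0)
print(w)  # 0.2
```

A weight of 0.2 on the negatives is equivalent to upweighting the positives by 5x, which matches the intuition that a default costs $5 while a missed good loan costs $1.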

Once again, it's worth noting that the case of variable costs is a lot harder. If you're using decision trees, some smoothing is a good idea, but in general, it's too involved to say much more here. You can check out this paper for some ideas.


The Why Not of Weighting

Weighting is kind of like pretending to live in a fantasy world in order to make better decisions about the real world. Instead of this, you could just make better decisions without the pretend part.

Let's return to our discussion about thresholds from the previous section. Instead of using weights to make a machine learning model with one threshold behave as if it had a different threshold, you could just change the threshold of that model directly. This can (admittedly) lead to a poorly calibrated model, but it might still be a good business decision because it can reduce costs.

In fact, if you go back to the cost matrix considered previously,

                   Actual Negative   Actual Positive
Predict Negative        C00               C01
Predict Positive        C10               C11


the optimal threshold for making decisions in order to minimize costs is precisely p*.

This is often simpler, and empirical evidence suggests that adjusting the threshold directly works a little better than weighting in practice.
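Here is a sketch of that direct approach: train an unweighted model, compute p* from the cost matrix, and threshold the predicted probabilities at p* instead of 0.5. The synthetic data and the FN=5/FP=1 costs are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic data with a fairly rare positive class (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=500) > 2.2).astype(int)

model = LogisticRegression().fit(X, y)

# Optimal threshold from the cost matrix: p* = (C10-C00)/((C10-C00)+(C01-C11)).
c00, c01, c10, c11 = 0.0, 5.0, 1.0, 0.0   # assumed costs: FN = 5, FP = 1
p_star = (c10 - c00) / ((c10 - c00) + (c01 - c11))   # = 1/6

proba = model.predict_proba(X)[:, 1]
default_preds = (proba >= 0.5).astype(int)     # default 0.5 threshold
cost_preds = (proba >= p_star).astype(int)     # cost-sensitive threshold

# The lower threshold flags more positives, trading false positives
# for fewer (more expensive) false negatives -- no weights involved.
print(int(default_preds.sum()), int(cost_preds.sum()))
```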

So basically, after all that talk about weighting, my general advice is: if possible, don't weight at all, but adjust your threshold instead! Of course, a lot of the ideas overlap, so hopefully this discussion was still useful.

Also, one last tip: if you aren't worried about cost-sensitive classification problems (e.g., problems where you have different costs associated with different predictions and DV cases), you probably shouldn't be thinking about using weighting as a tool. Unless you're a pollster.


Bio: Dr. Eric Hart is a Senior Data Scientist on the services team at Altair. He has a background in engineering and mathematics, and likes using his problem-solving skills to tackle interesting data challenges. He's also a Blokus world champion.

