Not so.
The ratings used to weight polls include performance on polls conducted in the final 21 days before every presidential, U.S. Senate, U.S. House and gubernatorial general election (including special elections), as well as every presidential primary, since 1998.
They also factor the pollster’s methodology into the ratings. For example, the first test asks whether a polling firm is a member of the NCPP, a participant in the AAPOR Transparency Initiative, or one that releases its raw data to the Roper Center archive. The second asks whether the firm regularly calls cellphones in addition to landlines. Each firm gets a methodological score between 0 and 2 based on the answers to these questions. They also adjust for the number of polls conducted.
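A toy sketch of that 0-to-2 score, assuming each "yes" answer simply adds a point (the exact weighting FiveThirtyEight uses may differ):

```python
def methodology_score(transparent: bool, calls_cellphones: bool) -> int:
    """Toy version of the 0-2 methodology score described above.

    transparent: NCPP member, AAPOR Transparency Initiative participant,
                 or deposits raw data with the Roper Center (assumption:
                 any one of these counts as a "yes").
    calls_cellphones: regularly dials cellphones as well as landlines.
    """
    return int(transparent) + int(calls_cellphones)

print(methodology_score(True, True))   # 2
print(methodology_score(False, True))  # 1
```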
Another complication is herding. The methodologically inferior pollster may be posting superficially good results by manipulating its polls to match those of the stronger polling firms. If left to its own devices — without stronger polls to guide it — it might not do so well.
There’s more to it; you can read about it here:
FiveThirtyEight’s pollster ratings: fivethirtyeight.com
Bottom line: the polls are averaged using a formula that was developed and back-tested over time to produce the most accurate results. If a factor improves results, it’s included; if it makes results worse, it’s excluded.
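To make the weighting idea concrete, here’s a minimal sketch with made-up grade-to-weight numbers (not FiveThirtyEight’s actual formula, which also adjusts for sample size, recency, house effects, and more):

```python
# Hypothetical weights by letter grade -- these numbers are illustrative
# assumptions, not the real model's values.
GRADE_WEIGHT = {"A": 1.0, "B": 0.8, "C": 0.5, "D": 0.2}

def weighted_average(polls):
    """polls: list of (candidate_share_pct, pollster_grade) tuples."""
    total_weight = sum(GRADE_WEIGHT[grade] for _, grade in polls)
    return sum(share * GRADE_WEIGHT[grade] for share, grade in polls) / total_weight

# The D-rated outlier at 55% barely moves the average.
polls = [(52.0, "A"), (48.0, "B"), (55.0, "D")]
print(round(weighted_average(polls), 2))  # 50.7
```

Drop the D-rated poll entirely and you throw away a little real information; down-weight it and it still nudges the average in the right direction, which is the back-tested finding described above.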
I wouldn’t worry too much about SurveyMonkey; with a D- rating it doesn’t carry much weight. But again, Nate found that including low-rated polls improved predictive results when they are weighted appropriately versus excluding them altogether. I expect that’s even more true in situations like state polls, where there are few polls to begin with. It’s part of the reason he has been able to call every state and every Senate seat in past elections.