I want to believe they are correct, since they currently project a big pickup with Democrats winning 240 seats, but I don't trust the polling.
On the positive side, they have some of the biggest samples of any polling out there, especially for House elections. If I were them, defending myself, I would tell you that by reaching more voters they are getting a better picture of the year's dynamics. The most helpful part of the polling is that it gives better snapshots of subgroups, since bigger samples mean smaller margins of error (MoEs) for those subsamples. CD even points this out on their site.
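To put a rough number on that subsample point, here is a minimal sketch using the standard simple-random-sample MoE formula. The sample sizes are illustrative, not CD's actual figures, and real polls carry additional design effects from weighting that this ignores.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    # Approximate 95% MoE for a proportion under the usual
    # normal approximation: z * sqrt(p(1-p)/n).
    return z * math.sqrt(p * (1 - p) / n)

# A typical-size phone poll vs. a larger automated sample.
# These n's are hypothetical, chosen only to show the effect.
for total_n in (500, 2000):
    sub_n = total_n // 5  # a subgroup that is 20% of respondents
    print(f"n={total_n}: full sample +/-{margin_of_error(total_n):.1%}, "
          f"subgroup (n={sub_n}) +/-{margin_of_error(sub_n):.1%}")
```

With the larger sample, the 20% subgroup's MoE (about +/-4.9%) is nearly as tight as the entire smaller poll's (+/-4.4%), which is exactly the advantage being claimed.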
That could be true, but polling is generally fairly accurate, and the polls that diverge most from the average usually turn out to be more wrong than the others, not less.
Two key issues seem relevant here. First, do the automated calls make a difference? Rasmussen and SurveyUSA have generally done good jobs in recent elections, so I don't automatically dismiss automated polling, but I also know we don't fully understand its effects on respondents.
Second, and this is where most polling has issues, is how they determine likely voters. In this case they do it by selecting a pool of voters judged likely to vote from the voter files themselves; different firms use different screens. According to their methodology section, the survey instrument also includes a quick crosscheck that the individual is actually registered to vote. They also publish their response rates, which is more transparency than most polling companies offer.
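For readers unfamiliar with file-based screens, here is a hypothetical sketch of what that selection step could look like. The record fields and the screen criteria are my invention for illustration; they are not CD's published methodology.

```python
# Hypothetical voter-file records; real files carry many more fields.
voter_file = [
    {"id": 1, "registered": True,  "voted_2002": True,  "voted_2004": True},
    {"id": 2, "registered": True,  "voted_2002": False, "voted_2004": True},
    {"id": 3, "registered": True,  "voted_2002": False, "voted_2004": False},
    {"id": 4, "registered": False, "voted_2002": False, "voted_2004": False},
]

def passes_screen(record):
    # One plausible screen: registered plus a vote in at least one of
    # the last two federal elections. Actual screens vary by firm.
    return record["registered"] and (record["voted_2002"] or record["voted_2004"])

likely_pool = [r for r in voter_file if passes_screen(r)]
print([r["id"] for r in likely_pool])  # -> [1, 2]

# The in-survey crosscheck then re-asks respondents drawn from this
# pool whether they are registered, catching stale file entries.
```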
Sampling from the voter file this way is not typical for most polling firms, and I'm not sure how it affects accuracy. I can come up with a theoretical reason why it should help, but that is ultimately an empirical question.
So I don't have a strong conclusion about the Constituent Dynamics/RT Strategies Majority Watch project, other than to say they aren't doing anything unreasonable. Still, their results concern me because they seem to be diverging from other polls more than usual in some key races.