The nature of prediction is such that you aren't always going to predict correctly. That doesn't mean it's a stupid exercise to make the prediction, or that the prediction itself was wrong. The statement "the next roll of a fair six-sided die is more likely to be somewhere between 2 and 6 than it is to be 1" is still correct, and sensible, even if you then roll the die and get a 1. When it comes to polls, though, people seem determined to ignore this probabilistic element and simply call the prediction wrong.
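The die example is easy to check for yourself with a quick simulation (a sketch only; the function name, seed and trial count are my own illustrative choices, not from anywhere in particular):

```python
import random

def fraction_between_2_and_6(trials=100_000, seed=42):
    """Roll a fair six-sided die `trials` times and return the fraction
    of rolls on which the 'likely' prediction (a result of 2-6) held."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials) if rng.randint(1, 6) >= 2)
    return hits / trials

freq = fraction_between_2_and_6()
# The prediction is right about 5/6 of the time (~0.833), yet roughly
# 1 roll in 6 still comes up 1 -- the prediction wasn't "wrong" on those rolls.
print(f"fraction of rolls between 2 and 6: {freq:.3f}")
```

The point of the simulation is the same as the point of the paragraph: a single outcome of 1 doesn't falsify the claim that 2-6 was more likely.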
The other thing is that there is a rather massive selection bias in calling polls wrong. Specifically, people always pick the ones that *are* wrong, and forget about the many that were right, or at least nowhere near as wrong as they are painted. In 2012, polling data combined with statistical modelling enabled Nate Silver to correctly call the result in all 50 US states in the presidential election; in 2008, he got it right in 49 of 50 states. This year, his predictions were less accurate, but that doesn't erase the success of the previous two.
The errors are in any case often exaggerated. It now seems clear that Clinton won the popular vote by a small margin, maybe no more than 1 or 2 percentage points once all votes are counted; the final predictions gave her a three or four point lead nationally. That is not a ridiculous difference, although the small swing appears to have been concentrated in a handful of states, which was enough to tip the overall result. The scientific polling prediction was not actually that far off, and certainly not far enough off to dismiss the entire field as a waste of time. It's just that politics matters, and is in the public eye, and in this case small errors compounded to make a huge difference.
Having said all that, it's clear that some systematic problems have emerged, particularly in the last couple of years, that polls have failed to account for properly. The major error was in judging support for Clinton in the northern Midwest, in states like Michigan, Wisconsin, Ohio and Pennsylvania. Her support seems either to have been missed in a couple of key demographics, or exaggerated in others -- and, despite earlier reports, turnout was somewhat lower than expected in some key states.
Polling predictions are never 100% accurate, but they are just about the only meaningful way to judge the mood of the public between, and in the run-up to, elections. My own feeling is that they have, perhaps, become too entangled with the system they are trying to measure. In the 2014 Scottish independence referendum, for example, the polls did a pretty good job in the end of predicting the outcome, but one poll gave a lead to the "Yes" campaign. The "No" side took it very seriously, and some politicians panicked and offered concessions backed by guarantees and promises they were in no position to make. Certainly, the campaign was re-energised on both sides after that poll, as people who might otherwise have stayed home turned out either to affirm the poll's result or to defeat it. One way or another, the poll became part of the story. The same thing may have happened in the 2015 general election (another small miss). Polls consistently showed that Labour were unable to win outright, leading to questions about a Labour/SNP coalition that Ed Miliband was forced to answer. I don't think anyone believed him when he said he'd never do it, but the row surely damaged his reputation, and I suspect that played a part in a late swing towards the Tories.
In that sense, perhaps we could all do with placing less emphasis on polling predictions anyway. But that is absolutely not because they are "wrong", or "defunct", or "useless". It is because anyone who cares too much about them risks drawing the wrong conclusions.