There's a risk in relying on keyword detection: people's totally innocent conversations can be flagged up just for discussing an issue, not supporting it. Automatic technology, if not implemented properly, can suck. Rather like bad censorship of swearwords. I've never quite got over the (separate but related) shock I felt when, for whatever reason, I was discussing "a mishit" (a legitimate word), and the forum using swearword censorship technology rendered it as "mi*doodoo*". Yes, really.
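That "mishit" mangling is the classic failure mode of naive substring matching, and the fix is equally well known. Here's a minimal sketch (using "doodoo" as the stand-in for the actual swearword, per the forum's own censoring):

```python
import re

# Stand-in for the real banned word; any banned-word list works the same way.
BANNED = ["doodoo"]

def naive_flag(message):
    """Flags any message containing a banned string anywhere,
    even buried inside an innocent word (the 'Scunthorpe problem')."""
    lower = message.lower()
    return any(bad in lower for bad in BANNED)

def boundary_flag(message):
    """Only flags the banned string when it appears as a whole word,
    using regex word boundaries."""
    lower = message.lower()
    return any(re.search(r"\b" + re.escape(bad) + r"\b", lower)
               for bad in BANNED)

print(naive_flag("a midoodoo"))     # True  -- false positive on an innocent word
print(boundary_flag("a midoodoo"))  # False -- the innocent word passes
print(boundary_flag("utter doodoo"))  # True -- the real thing is still caught
```

Word-boundary matching kills the mi*doodoo* class of error, though of course it does nothing for the deeper problem below: words that are genuinely the flagged term but used innocently.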
In a similar way, an automated programme searching for, e.g., "Taliban" or "beheading" might lead to some guilty parties being detected earlier, but would presumably also mean people discussing Game of Thrones getting flagged up, or viewers of a video made by a Belgian TV company whose subtitles randomly popped up with "alqaeda" a couple of times, despite being entirely about Maths (or so I'm told!). That's not really ideal.
And anyway there is the change in philosophy -- all conversations of any kind become monitored in some way, with no cause in most cases. Even if it stays as an automated process to start with, it's hard to see how this wouldn't lead to innocent people getting caught in the net along with the intended targets.