Predictive software is very new. The techniques have been around for decades, but only now has computing become powerful and cheap enough to use outside government and big corporations. Not surprisingly, a lot of people who are new to it are making mistakes. And it's one thing if you're Amazon and you miss a few sales. But it's another if you're a company making software to "predict" where crime will happen, and the data you rely on to build your models is racially and/or economically biased. All of a sudden, you're reinforcing an unjust situation.
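To make the mechanism concrete, here's a minimal toy simulation (hypothetical numbers, no real data) of how that reinforcement works: two neighborhoods commit crime at the exact same rate, but one starts out with more recorded incidents because it was patrolled more. A "predictive" system that allocates patrols based on past records keeps sending officers where the records are, which generates more records, and the skew never corrects itself.

```python
import random

# Toy feedback-loop sketch (hypothetical, for illustration only).
# Both neighborhoods have the SAME true crime rate, but "A" starts
# with more recorded incidents because it was patrolled more.
TRUE_CRIME_RATE = 0.10          # identical in both neighborhoods
recorded = {"A": 50, "B": 10}   # historical records are already skewed

for year in range(10):
    # The "predictive" step: assign 100 patrols proportional to past records.
    total = sum(recorded.values())
    patrols = {hood: round(100 * count / total)
               for hood, count in recorded.items()}

    # Patrols only record crime where they're actually looking.
    for hood, n_patrols in patrols.items():
        recorded[hood] += sum(random.random() < TRUE_CRIME_RATE
                              for _ in range(n_patrols))

share_a = recorded["A"] / sum(recorded.values())
print(f"After 10 years, {share_a:.0%} of recorded crime is in A,")
print("even though both neighborhoods offend at the same rate.")
```

Run it and neighborhood A stays "high crime" forever. The algorithm isn't measuring crime; it's measuring where we looked for crime. Bias in, bias out.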
"Computer algorithm" does not automatically mean "unbiased".