Bias is a tricky term in general, and psychologists have written long treatises trying to explain what it is and how it works.
Most biases creep into AI unintentionally, both in the coding of the algorithm and in the selection of training data. Organizations must therefore actively counter bias by fostering diversity, training employees to spot biases, and constantly monitoring the output of AI processes to ensure that the results are fair.
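To make the monitoring idea concrete, here is a minimal sketch of one common check: comparing positive-prediction rates across demographic groups (a demographic parity gap). The group labels, predictions, and threshold below are hypothetical illustrative values, not a reference implementation.

```python
def positive_rate(predictions, groups, group):
    """Share of positive predictions within one group."""
    subset = [p for p, g in zip(predictions, groups) if g == group]
    return sum(subset) / len(subset) if subset else 0.0

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates between groups."""
    rates = {g: positive_rate(predictions, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values())

# Hypothetical loan-approval outputs (1 = approved) for two groups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, groups)
if gap > 0.1:  # threshold chosen arbitrarily for illustration
    print(f"Warning: demographic parity gap {gap:.2f} exceeds threshold")
```

A check like this would run periodically on production outputs, with alerts feeding back to the team responsible for the model.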
Without the ability to account for the bias that exists all around us, AI will never provide equal service to all. And even then, we should resist the temptation to think that AI will ever achieve a state of perfect fairness.