The following interesting articles were shared:
Ethics could be programmed into AI solutions as business rules or boundaries, placing the burden on the designers and programmers.
If the AI outcomes are not "humane," a social responsibility concern could arise. For example, if AI regulates prescriptions for patients currently in palliative care, the algorithm might factor in their low life expectancy and, on that basis, allocate lower-quality pharmaceuticals and treatment options.
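One way to picture "ethics as business rules or boundaries" is a hard-coded policy layer that vetoes model outputs violating an explicit constraint. The sketch below is a minimal, hypothetical illustration (the `Recommendation` class, `MIN_CARE_LEVEL` floor, and care-level scale are all made up for the example), not a description of any real system:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    treatment: str
    care_level: int  # hypothetical scale: 1 (minimal) .. 5 (full standard of care)

# An explicit ethical floor set by the designers, not learned from data.
MIN_CARE_LEVEL = 3

def apply_ethics_boundary(rec: Recommendation) -> Recommendation:
    # The business rule overrides the model whenever its output
    # falls below the designer-set floor.
    if rec.care_level < MIN_CARE_LEVEL:
        return Recommendation(rec.treatment, MIN_CARE_LEVEL)
    return rec

# A model that downgraded a palliative patient's care gets corrected:
print(apply_ethics_boundary(Recommendation("palliative plan", 1)).care_level)  # 3
```

The point of the pattern is that the boundary is auditable and deliberate, unlike whatever the model learned on its own.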
AI could also predict future behavior from prior patterns and choices, yielding predictive information detailed enough that personal privacy could be invaded to the detriment of the people affected.
I recommend reading the books by Michael Lewis, particularly The Fifth Risk, Flash Boys, and The Undoing Project, to gain a perspective on the effects of AI and algorithms on human choices and behaviors.
You say AI outcomes are the result of algorithms and programming logic. In a strict sense this is true, but in many, if not most, cases the algorithms and logic are not written by programmers but generated by other, higher-level algorithms. These higher-level algorithms infer logic rules (or something somewhat like logic rules) from the examples presented to them. Thus, if there is implicit bias in the training examples, there is at least a moderate probability of bias in the AI's outcomes, even though it was never deliberately programmed into the system.
Some types of AI do generate explicit rules, for example by building decision trees that can be examined. But other types, such as neural networks, create decision models that cannot really be interpreted as logic. And even decision trees can contain logic operating on variables that are not themselves inherently discriminatory but are correlated with other variables in such a way that the result is discrimination. Zip codes are an easy example.
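The zip-code effect can be shown with a toy simulation. In the hypothetical data below, a protected group attribute is deliberately excluded from the model, but zip code (which is included) is strongly correlated with it, and the historical approval labels are biased against group B. The zip codes, probabilities, and "one-rule" learner are all invented for illustration:

```python
import random
from collections import defaultdict

random.seed(0)

# Synthetic records: (group, zipcode, historical approval label).
# Group membership is never shown to the model, but 90% of group A
# lives in zip 11111 while 90% of group B lives in zip 22222, and
# historical approvals favor group A (80% vs 30%).
def make_record():
    group = random.choice(["A", "B"])
    in_zip1 = random.random() < (0.9 if group == "A" else 0.1)
    zipcode = "11111" if in_zip1 else "22222"
    approved = random.random() < (0.8 if group == "A" else 0.3)
    return group, zipcode, int(approved)

data = [make_record() for _ in range(10_000)]

# "Train" a one-rule model on zip code alone: predict the majority
# historical label for each zip.
counts = defaultdict(lambda: [0, 0])
for _, zipcode, label in data:
    counts[zipcode][label] += 1
model = {z: int(c[1] > c[0]) for z, c in counts.items()}

# The protected attribute never entered the model, yet predicted
# approval rates differ sharply by group: zip code acts as a proxy.
rates = {}
for g in ("A", "B"):
    zips = [z for grp, z, _ in data if grp == g]
    rates[g] = sum(model[z] for z in zips) / len(zips)
print(model, rates)
```

With these settings the model approves zip 11111 and rejects zip 22222, so group A is approved at roughly a 90% rate and group B at roughly 10%, despite group membership never appearing as an input.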
This is just to say that we may not, strictly speaking, be able to "program" ethics in. We will need to be conscious of the possibility of inadvertent bias in the choice of training examples, and also test for biased results that slip through anyway.
Of course, this does not account for the possibility that some individuals may actually try to implement systems with systematic biases for their own nefarious purposes.
That said, it will be interesting to see how we deal with biases that are actually true: some differences between groups really exist (certain groups of people are generally taller, stronger, or slower than others), including some that people wish were not true but are. The AI will pick up on those differences (and not just differences about people), and that is genuine learning. But if people don't like what the model learns, they will insist on tampering with it, which will actually weaken its logical application.
Frankly, I'm not sure we're bright enough as a species to handle the potential here. Look at how poorly we do at managing physical assets, which are concrete (no pun intended).