There’s an intriguing article up at Quartz about AI self-replication. In the middle of it is an interesting tidbit about AI adopting the biases and flaws of its developers. Basically, an artificial system will accept whatever data and rules are programmed into it, and those biases are then reinforced by new data inputs.
It may go without saying, but data and technology don’t come with built-in morals or ethics. If you analyze a spreadsheet that has incomplete or inaccurate data (whether you realize it or not), you will get inaccurate results.
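A toy sketch of that garbage-in, garbage-out problem (the numbers and the "missing salaries recorded as zero" scenario are hypothetical, just for illustration): the code averages a column without knowing that some entries are placeholders, and the result looks precise while being wrong.

```python
# Hypothetical dataset: 0 means "salary unknown", not "paid nothing".
salaries = [52000, 61000, 0, 58000, 0, 47000]

# The naive analysis treats every value as real data.
naive_average = sum(salaries) / len(salaries)
print(f"Naive average: {naive_average:,.0f}")

# Excluding the placeholder values tells a very different story.
reported = [s for s in salaries if s > 0]
honest_average = sum(reported) / len(reported)
print(f"Average of reported salaries: {honest_average:,.0f}")
```

The naive figure comes out roughly a third lower than the honest one, and nothing in the code warns you: the software did exactly what it was told with exactly the data it was given.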
Ethics and morals (and yes, they are different) are human constructs. They are based on our views of the world, our culture, and our sense of right and wrong. Never assume that the systems you work with have these constructs.