DeepMind Hires Academics on AI Ethics Committee

Artificial intelligence company DeepMind (acquired by Google in 2014) has set up an ethics committee composed of academics and experts from the charity sector. The company hopes these professionals will help shed some light on AI and ethics - something that recent events suggest is badly needed.

The DeepMind ethics committee is not a new thing. DeepMind had originally established such a committee a few years ago, but it was dissolved in 2016. When DeepMind was acquired by Google, the company was promised that the ethics committee would grow and remain in place, but that didn't happen (at least not publicly) right away.

Now, the ethics committee has been reestablished.

Why AI Needs Ethics

The absolute best modern example of the need for ethics in AI research is Facebook. The company has long allowed algorithms to run the show, but this has resulted in a bit of a disaster. This past month, Facebook was criticised for allowing its algorithm to create ad categories that were clearly anti-Semitic and racist.

Various ads were placed in these categories by Russian-led groups hiding behind anonymous Facebook group guises. Facebook has also come under fire for hiring human moderators to view violent videos and other disturbing material that the company is trying to keep from spreading across the internet. How can AI help with any of this?

The AI Key

Algorithms running on their own are already well steeped in AI. The problem is that computers do not know about ethics or how to act ethically. This is precisely what the DeepMind ethics committee is trying to fix. AI can also help the humans who have to moderate violent material - if the right AI can be created, computers could monitor sites like Facebook for this content, as the sketch below illustrates. That would spare moderators much of the shock and mental trauma they now face.
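
To make that idea concrete, here is a minimal, hypothetical sketch of automated content triage in Python. The keyword-based scorer, the thresholds, and every name in it are invented for illustration and stand in for a real trained classifier; the point is only that a machine can filter the bulk of graphic material so that human moderators see far less of it.

```python
# Toy sketch of automated content triage: a scorer flags likely-violent posts so
# that only borderline cases ever reach a human moderator. The keyword scorer
# below is an invented stand-in for a real trained classifier.
from dataclasses import dataclass

VIOLENT_TERMS = {"attack", "kill", "behead", "shooting"}  # illustrative only


@dataclass
class Post:
    post_id: int
    text: str


def risk_score(post: Post) -> float:
    """Return a crude 0..1 risk estimate based on how many flagged terms appear."""
    words = [w.strip(".,!?").lower() for w in post.text.split()]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in VIOLENT_TERMS)
    return min(1.0, 10.0 * hits / len(words))


def triage(post: Post, auto_remove_at: float = 0.9, review_at: float = 0.4) -> str:
    """Route a post: auto-remove, queue for human review, or allow."""
    score = risk_score(post)
    if score >= auto_remove_at:
        return "remove"        # clear-cut cases never reach a person
    if score >= review_at:
        return "human_review"  # only the ambiguous middle goes to moderators
    return "allow"


if __name__ == "__main__":
    print(triage(Post(1, "Live shooting footage of the attack")))  # -> remove
    print(triage(Post(2, "Holiday photos from the beach")))        # -> allow
```

In a sketch like this, the thresholds are themselves an ethical choice: set them low and the machine censors too much, set them high and more disturbing material lands on human screens.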

But how do you teach machines to be ethical? Further, whose ethics are they? Ethics vary widely from one part of the world to the next - and from religion to religion, for that matter. Bringing in ethics experts is a good start, but developing a set of ethics for computers to follow may be a deeper dive than was first expected. It might also be one that takes many years to complete.
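
As a hypothetical illustration of the "whose ethics?" problem, the following fragment encodes two invented, region-specific rule sets. The same action is permitted under one and forbidden under the other, which is exactly the choice any machine-readable ethics policy forces its designers to make.

```python
# Hypothetical "whose ethics?" illustration: the same action is permitted under
# one community's (invented) rule set and forbidden under another's, so any
# machine-readable ethics policy has to be chosen by people, not discovered.
POLICIES = {
    "region_a": {"depictions_of_alcohol": "allow", "graphic_violence": "forbid"},
    "region_b": {"depictions_of_alcohol": "forbid", "graphic_violence": "forbid"},
}


def is_permitted(action: str, region: str) -> bool:
    """Check an action against a region-specific rule set; default to forbid."""
    return POLICIES.get(region, {}).get(action, "forbid") == "allow"


print(is_permitted("depictions_of_alcohol", "region_a"))  # True
print(is_permitted("depictions_of_alcohol", "region_b"))  # False
```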

The Burning Question

The other question behind AI ethics is whether machines can actually learn ethics at all. The human mind is intricate and full of 'ethical webs' - meaning that one ethical rule or value might be easily taught, but there is a whole other set of values that has to be followed as well. And then there's the problem of logic. Can machines really learn to be ethical based on logic? These are the types of questions that the DeepMind ethics group will attempt to answer.

It is unclear at this time exactly what the group will focus on. DeepMind hasn't told the press much about the newly established ethics committee - other than that the group is made up mostly of academics. Ethics have to be part of artificial intelligence development, that much is certain. What is uncertain is how much capacity machines have for learning this material - and then for applying it on their own in contexts that make sense.