As AI becomes more pervasive in society, researchers have a responsibility to ensure that AI research and technologies are developed and used ethically. Focused on responsible AI, this group develops theories and applications that promote respect for human rights while delivering societal benefits, in support of a broader agenda of social justice and inclusion. The group leads public discussion on ethical issues such as the use of AI in autonomous weapons systems.
Projects include: models of ethical reasoning in robotics; machine learning to measure progress towards the Sustainable Development Goals (SDGs); e-safety through understanding online extremism; identifying gender bias in legal judgements; and recommender systems to support access to healthcare and social welfare services.
Our partners
- Allens Hub for Technology, Law and Innovation
- Black Dog Institute
- BPS (Statistics Indonesia)
- Infoxchange
- LexisNexis
- mothers2mothers
- Pulse Lab Jakarta
- STIS (Politeknik Statistika)
- UNSW Law
- UNSW Science