
PHIX3400 – Rights, Responsibilities, and AI

2025 – Session 2, Online-flexible

General Information

Unit convenor and teaching staff
Unit Convenor, Lecturer, Tutor: Regina Fabry
Lecturer: Xiaohan Yu
Lecturer: Yanqiu Wu
Lecturer: Niloufer Selvadurai
Tutor: Bhanuraj Kashyap
Credit points: 10
Prerequisites: 130cp at 1000 level or above
Corequisites:
Co-badged status: PHIL8400; PHIL3400

Unit description

As AI becomes increasingly entrenched in human affairs, its scientific, moral, political, economic, and other social dimensions are becoming significant issues. For instance, there is significant concern that machine learning algorithms contribute to discrimination against members of oppressed groups (e.g., women, people of colour). This unit, co-designed and co-taught by experts in Computing, Philosophy, and cognate disciplines, presents and discusses key theoretical, ethical, and empirical questions about the conditions of explainable, safe, fair, and responsible AI. It also explores the scientific, ethical, political, economic, and other social implications of topical issues such as algorithmic decision making, applications of deep learning models, and robot rights. Students will engage with ideas such as balancing risks and responsibilities, in both the scientific and moral sense, in the context of evolving AI technologies.

Important Academic Dates

Information about important academic dates, including deadlines for withdrawing from units, is available at https://www.mq.edu.au/study/calendar-of-dates

Learning Outcomes

On successful completion of this unit, you will be able to:

  • ULO1: Explain the fundamental principles underlying AI, and the normative constraints that it needs to satisfy.
  • ULO2: Demonstrate an understanding of the ethical and other socioeconomic implications of AI.
  • ULO3: Demonstrate an understanding of what Responsible AI means, or will mean, in our current as well as future society.
  • ULO4: Critically reflect on the use of AI in relevant fields.

General Assessment Information

Unless a Special Consideration request has been submitted and approved, a 5% penalty (of the total possible mark) will be applied for each day a written assessment is not submitted, up until the 7th day (including weekends). After the 7th day, a mark of ‘0’ (zero) will be awarded even if the assessment is submitted. The submission time for all written assessments is 11.55 pm. A 1-hour grace period is provided to students who experience a technical issue. This late penalty applies to written reports and recordings only. Late submission of time-sensitive tasks (such as tests/exams, performance assessments/presentations, and scheduled practical assessments/labs) will be addressed by the unit convenor through a Special Consideration application.

GenAI/ChatGPT

In this Unit, unless notified otherwise in writing by the Unit Convenor, substantive assessment content that has been generated by AI may be regarded as not being the student’s own work. This applies to all assessments, including online forums. In submitting assessments in this unit, all students will be required to confirm their agreement with the following:

In submitting this assessment, I certify that this submission is my own work and demonstrates my own understanding, analysis, research, reflection, critical thinking, and writing. I am not submitting anything that I cannot myself fully explain and defend, if called upon to do so. I understand that if my teachers have concerns about whether this submission is my own work, I may be required to attend an interview with the Unit Convenor/Integrity Officer/academic staff to verify my research methods, my understanding of the content, and my close familiarity with all sources I have cited. 

Assessment Tasks

Name | Weighting | Hurdle | Due
Reflective task | 35% | No | 31/08/2025 at 11:55 PM
Media presentation | 20% | No | 12/10/2025 at 11:55 PM
Research essay | 45% | No | 09/11/2025 at 11:55 PM

Reflective task

Assessment Type¹: Reflective Writing
Indicative Time on Task²: 30 hours
Due: 31/08/2025 at 11:55 PM
Weighting: 35%

 

Present arguments and defend your own view on a topic from the unit.

 


On successful completion you will be able to:
  • Explain the fundamental principles underlying AI, and the normative constraints that it needs to satisfy.
  • Demonstrate an understanding of the ethical and other socioeconomic implications of AI.
  • Demonstrate an understanding of what Responsible AI means, or will mean, in our current as well as future society.
  • Critically reflect on the use of AI in relevant fields.

Media presentation

Assessment Type¹: Media presentation
Indicative Time on Task²: 18 hours
Due: 12/10/2025 at 11:55 PM
Weighting: 20%

 

Media presentation

 


On successful completion you will be able to:
  • Explain the fundamental principles underlying AI, and the normative constraints that it needs to satisfy.
  • Demonstrate an understanding of the ethical and other socioeconomic implications of AI.
  • Demonstrate an understanding of what Responsible AI means, or will mean, in our current as well as future society.

Research essay

Assessment Type¹: Essay
Indicative Time on Task²: 35 hours
Due: 09/11/2025 at 11:55 PM
Weighting: 45%

 

Research essay on a topic from the unit.

 


On successful completion you will be able to:
  • Explain the fundamental principles underlying AI, and the normative constraints that it needs to satisfy.
  • Demonstrate an understanding of the ethical and other socioeconomic implications of AI.
  • Demonstrate an understanding of what Responsible AI means, or will mean, in our current as well as future society.
  • Critically reflect on the use of AI in relevant fields.

¹ If you need help with your assignment, please contact:

  • the academic teaching staff in your unit for guidance in understanding or completing this type of assessment
  • the Writing Centre for academic skills support.

² Indicative time-on-task is an estimate of the time required for completion of the assessment task and is subject to individual variation.

Delivery and Resources

Delivery: All lectures are delivered live, and Echo recordings are available via iLearn. Online forums are hosted in iLearn.

Resources: All required readings are provided in iLearn and Leganto and must be read before class.

Unit Schedule

W1 – Introduction (Dr Regina Fabry) – 28 July 2025

  • No readings
  • No Online Forum

 

W2 – Ethics and Robotics (Dr Xiaohan Yu) – 4 August 2025

  • Reading 1: Birhane, A., & van Dijk, J. (2020). Robot rights? Let’s talk about human welfare instead. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 207–213. https://doi.org/10.1145/3375627.3375855
  • Reading 2: Vanderelst, D., & Winfield, A. (2018). An architecture for ethical robots inspired by the simulation theory of cognition. Cognitive Systems Research, 48, 56–66. https://doi.org/10.1016/j.cogsys.2017.04.002
  • Online Forum 1

 

W3 – Algorithmic Decision Making (Dr Xiaohan Yu) – 11 August 2025

  • Reading 1: Gorwa, R., Binns, R., & Katzenbach, C. (2020). Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society, 7(1), 2053951719897945. https://doi.org/10.1177/2053951719897945
  • Reading 2: Lim, H. S., & Taeihagh, A. (2019). Algorithmic decision-making in AVs: Understanding ethical and technical concerns for smart cities. Sustainability, 11(20). https://doi.org/10.3390/su11205791
  • Online Forum 2

 

W4 – AI and Safety (Dr Yanqiu (Autumn) Wu) – 18 August 2025

  • Reading 1: Limarga, R., Song, Y., Nayak, A., Rajaratnam, D., & Pagnucco, M. (2024). Formalisation and evaluation of properties for consequentialist machine ethics. In K. Larson (Ed.), Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, IJCAI-24 (pp. 440–448). International Joint Conferences on Artificial Intelligence Organization. https://doi.org/10.24963/ijcai.2024/49
  • Reading 2: Burton, S., Habli, I., Lawton, T., McDermid, J., Morgan, P., & Porter, Z. (2020). Mind the gaps: Assuring the safety of autonomous systems from an engineering, ethical, and legal perspective. Artificial Intelligence, 279, 103201. https://doi.org/10.1016/j.artint.2019.103201
  • Online Forum 3

 

W5 – Moral Responsibility of AI Researchers (Dr Xiaohan Yu) – 25 August 2025

  • Reading 1: Freedman, R., Borg, J. S., Sinnott-Armstrong, W., Dickerson, J. P., & Conitzer, V. (2020). Adapting a kidney exchange algorithm to align with human values. Artificial Intelligence, 283, 103261. https://doi.org/10.1016/j.artint.2020.103261
  • Reading 2: Schaich Borg, J. (2022). The AI field needs translational Ethical AI research. AI Magazine, 43(3), 294–307. https://doi.org/10.1002/aaai.12062
  • Online Forum 4
  • Assignment 1 (Reflective Task)

 

W6 – The Regulation of AI (Prof Niloufer Selvadurai) – 1 September 2025

  • Reading 1: Gacutan, J., & Selvadurai, N. (2020). A statutory right to explanation for decisions generated using artificial intelligence. International Journal of Law and Information Technology, 28(3), 193–216. https://doi.org/10.1093/ijlit/eaaa016
  • Reading 2: Smuha, N. A. (2021). From a ‘race to AI’ to a ‘race to AI regulation’: Regulatory competition for artificial intelligence. Law, Innovation and Technology, 13(1), 57–84. https://doi.org/10.1080/17579961.2021.1898300
  • Online Forum 5

 

W7 – Ethical/Social AI Frameworks (Dr Regina Fabry) – 8 September 2025

  • Reading 1: Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds and Machines, 30(1), 99–120. https://doi.org/10.1007/s11023-020-09517-8
  • Reading 2: Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena, E. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5
  • Online Forum 6

 

W8 – Power, Politics, and AI (Dr Regina Fabry) – 15 September 2025

  • Reading 1: Lazar, S. (2022). Power and AI: Nature and justification. In J. B. Bullock, Y.-C. Chen, J. Himmelreich, V. M. Hudson, A. Korinek, M. M. Young, & B. Zhang (Eds.), The Oxford Handbook of AI Governance. Oxford University Press. https://doi.org/10.1093/oxfordhb/9780197579329.013.12
  • Reading 2: Campolo, A., & Crawford, K. (2020). Enchanted determinism: Power without responsibility in artificial intelligence. Engaging Science, Technology, and Society, 6, 1–19. https://doi.org/10.17351/ests2020.277
  • Online Forum 7

 

RECESS FROM 22 SEPTEMBER TO 3 OCTOBER 2025

 

W9 – What Is AI After All? The Turing Test Revisited (Dr Regina Fabry) – 6 October 2025

  • Reading 1: Proudfoot, D. (2013). Rethinking Turing’s Test. The Journal of Philosophy, 110(7), 391–411. https://doi.org/10.5840/jphil2013110722
  • Reading 2: Wheeler, M. (2020). Deceptive Appearances: The Turing Test, Response-Dependence, and Intelligence as an Emotional Concept. Minds and Machines, 30(4), 513–532. https://doi.org/10.1007/s11023-020-09533-8
  • Online Forum 8
  • Assignment 2 (Media presentation)

 

W10 – Explainable AI (Dr Regina Fabry) – 13 October 2025

  • Reading 1: Zednik, C. (2021). Solving the black box problem: A normative framework for explainable artificial intelligence. Philosophy & Technology, 34(2), 265–288. https://doi.org/10.1007/s13347-019-00382-7
  • Reading 2: Russo, F., Schliesser, E., & Wagemans, J. (2024). Connecting ethics and epistemology of AI. AI & Society, 39(4), 1585–1603. https://doi.org/10.1007/s00146-022-01617-6
  • Online Forum 9

 

W11 – Equitable AI (Dr Regina Fabry) – 20 October 2025

  • Reading 1: Cossette-Lefebvre, H., & Maclure, J. (2023). AI’s fairness problem: Understanding wrongful discrimination in the context of automated decision-making. AI and Ethics, 3(4), 1255–1269. https://doi.org/10.1007/s43681-022-00233-w
  • Reading 2: Kasirzadeh, A. (2022). Algorithmic fairness and structural injustice: Insights from feminist political philosophy. Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society, 349–356. https://doi.org/10.1145/3514094.3534188
  • Online Forum 10

 

W12 – Trustworthy AI? The Case of Chatbots (Dr Regina Fabry) – 27 October 2025

  • Reading 1: Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? 🦜. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/10.1145/3442188.3445922
  • Reading 2: Heersmink, R., de Rooij, B., Clavel Vázquez, M. J., & Colombo, M. (2024). A phenomenology and epistemology of large language models: Transparency, trust, and trustworthiness. Ethics and Information Technology, 26(3), 41. https://doi.org/10.1007/s10676-024-09777-3
  • No Online Forum

 

W13 – Writing and Review

  • No Readings
  • No Lecture
  • No Online Forum
  • Assignment 3 (Research Essay)

Policies and Procedures

Macquarie University policies and procedures are accessible from Policy Central (https://policies.mq.edu.au). Students should be aware, in particular, of the policies relating to Learning and Teaching.

Students seeking more policy resources can visit Student Policies (https://students.mq.edu.au/support/study/policies). It is your one-stop-shop for the key policies you need to know about throughout your undergraduate student journey.

To find other policies relating to Teaching and Learning, visit Policy Central (https://policies.mq.edu.au) and use the search tool.

Student Code of Conduct

Macquarie University students have a responsibility to be familiar with the Student Code of Conduct: https://students.mq.edu.au/admin/other-resources/student-conduct

Results

Results published on platforms other than eStudent (e.g., iLearn, Coursera) or released directly by your Unit Convenor are not confirmed, as they are subject to final approval by the University. Once approved, final results will be sent to your student email address and will be made available in eStudent. For more information, visit connect.mq.edu.au or, if you are a Global MBA student, contact globalmba.support@mq.edu.au.

Academic Integrity

At Macquarie, we believe academic integrity – honesty, respect, trust, responsibility, fairness and courage – is at the core of learning, teaching and research. We recognise that meeting the expectations required to complete your assessments can be challenging. So, we offer you a range of resources and services to help you reach your potential, including free online writing and maths support, academic skills development and wellbeing consultations.

Student Support

Macquarie University provides a range of support services for students. For details, visit http://students.mq.edu.au/support/

Academic Success

Academic Success provides resources to develop your English language proficiency, academic writing, and communication skills.

The Library provides online and face-to-face support to help you find and use relevant information resources.

Student Services and Support

Macquarie University offers a range of Student Support Services including:

Student Enquiries

Got a question? Ask us via the Service Connect Portal, or contact Service Connect.

IT Help

For help with University computer systems and technology, visit http://www.mq.edu.au/about_us/offices_and_units/information_technology/help/

When using the University's IT, you must adhere to the Acceptable Use of IT Resources Policy. The policy applies to all who connect to the MQ network, including students.


Unit information based on version 2025.04 of the Handbook