Data-driven machine learning (ML) and Artificial Intelligence (AI) tools and services now quietly power countless automated decision-making and predictive processes across university business areas and throughout the student life cycle. While ML/AI applications can be deployed responsibly to advance student equity interests, if adopted uncritically they can also amplify social inequalities and historical injustice, often by stealth. Given the rapid proliferation of these technologies and processes throughout the university, it is increasingly important that principles and processes are in place to protect institutional and national student equity interests and goals.

Although ML/AI applications can potentially work in students' own learning interests, they also present a demonstrable and, perhaps, insidious threat to the project of student equity. Moreover, it is increasingly difficult for non-specialist university leaders and decision-makers to anticipate how the ML/AI applications their institutions adopt may be working to undermine their own strongly held commitments to student equity and diversity. The proprietary nature of commercial ML/AI products and services can also frustrate a university's attempts to audit the impact of these processes on equity students.

The broad field of data science is itself undergoing a significant reckoning with its own complicity in perpetuating social discrimination and disadvantage through ML/AI processes. It is now widely recognised that the ML/AI production 'pipeline' involves a series of critical choices through which discriminatory outcomes may be introduced at any stage: data collection and preparation, algorithmic training, model evaluation, and final deployment. This recognition has led to an explosion of research in the emerging fields of Fairness, Accountability and Transparency in Machine Learning (FATML) and Explainable Artificial Intelligence (XAI).
To protect student equity interests in the era of advanced data analytics, this research offers the beginnings of a conceptual and interdisciplinary framework of guiding principles for the equitable use of data analytics in Australian universities. We argue that the proliferation of ML/AI applications is among the most pressing emerging issues facing student equity interests in Australian higher education. When these applications are deployed within the rich human contexts of universities, ensuring ML/AI 'fairness' is more than a technical challenge: it calls for the continuous negotiation and articulation of competing visions of 'equality' and 'the good'. Drawing on the fields of information justice and the ethics of technology, this project seeks to provide a conceptual and interdisciplinary framework to aid institutions and individuals in using ML/AI applications while minimising the risk of unintended consequences.